
CS 188 Summer 2015 Introduction to Artificial Intelligence Final

You have approximately 2 hours 50 minutes.
The exam is closed book, closed calculator, and closed notes except your one-page crib sheet.
Mark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation. All short answer sections can be successfully answered in a few sentences AT MOST.

First name
Last name
SID
edX username
Name of person on your left
Name of person on your right

For staff use only:
Q1. Search and Probability   /10
Q2. Games                    /8
Q3. Utilities                /10
Q4. Farmland CSP             /8
Q5. MDP                      /16
Q6. Bayes Nets               /8
Q7. Chameleon                /10
Q8. Perceptron               /10
Total                        /80

THIS PAGE IS INTENTIONALLY LEFT BLANK

Q1. [10 pts] Search and Probability

Each True/False question is worth 1 point. Leaving a question blank is worth 0 points. Answering incorrectly is worth -1 point.

(a) Consider a graph search problem where for every action, the cost is at least ɛ, with ɛ > 0. Assume the heuristic is admissible.

(i) [1 pt] [true or false] Uniform-cost graph search is guaranteed to return an optimal solution.
True. UCS expands paths in order of least total cost, so the optimal solution is found.

(ii) [1 pt] [true or false] The path returned by uniform-cost graph search may change if we add a positive constant to every step cost.
True. Consider two paths from the start state (S) to the goal (G): S - A - G and S - G, with, for example, cost(S, A) = 1, cost(A, G) = 1, and cost(S, G) = 3. The optimal path is through A. Now, if we add 2 to each of the costs, the optimal path is directly from S to G. Since uniform cost search finds the optimal path, its path will change.

(iii) [1 pt] [true or false] A* graph search is guaranteed to return an optimal solution.
False. The heuristic is admissible, but is not guaranteed to be consistent, which is required for A* graph search to be optimal.

(iv) [1 pt] [true or false] A* graph search is guaranteed to expand no more nodes than depth-first graph search.
False. Depth-first graph search could, for example, go directly to a sub-optimal solution, expanding fewer nodes than A* does.

(v) [1 pt] [true or false] If h1(s) and h2(s) are two admissible heuristics, then their average f(s) = 1/2 h1(s) + 1/2 h2(s) must also be admissible.
True. Let h*(s) be the true distance from s. We know that h1(s) ≤ h*(s) and h2(s) ≤ h*(s), thus h_avg(s) = 1/2 h1(s) + 1/2 h2(s) ≤ 1/2 h*(s) + 1/2 h*(s) = h*(s).

(b) [3 pts] A, B, C, and D are random variables with binary domains. How many entries are in the following probability tables and what is the sum of the values in each table? Write a ? in the box if there is not enough information given.

Table               Size   Sum
P(A | B)             4      2
P(A, D | +b, +c)     4      1
P(B | +a, C, D)      8      4

(c) [2 pts] Write all the possible chain rule expansions of the joint probability P(a, b, c). No conditional independence assumptions are made.
P(a)P(b | a)P(c | a, b), P(a)P(c | a)P(b | a, c), P(b)P(a | b)P(c | a, b), P(b)P(c | b)P(a | b, c), P(c)P(b | c)P(a | b, c), P(c)P(a | c)P(b | a, c)
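The answers to (b) and (c) can be sanity-checked mechanically. The short Python sketch below (illustrative only, not part of the exam) enumerates the 3! = 6 chain-rule orderings and verifies, on an arbitrary joint distribution over two binary variables, that a table of the form P(A | B) has 4 entries summing to 2.

```python
import itertools
import random

# All 3! = 6 chain-rule expansions of P(a, b, c).
for order in itertools.permutations(['a', 'b', 'c']):
    terms = []
    for i, v in enumerate(order):
        given = ','.join(order[:i])
        terms.append(f"P({v}|{given})" if given else f"P({v})")
    print(''.join(terms))

# P(A | B) for binary A, B: 4 entries, summing to 2 (one distribution per value of B).
joint = {ab: random.random() for ab in itertools.product([0, 1], repeat=2)}
z = sum(joint.values())
joint = {ab: p / z for ab, p in joint.items()}
p_b = {b: joint[(0, b)] + joint[(1, b)] for b in (0, 1)}
cond = {(a, b): joint[(a, b)] / p_b[b] for (a, b) in joint}
print(len(cond), round(sum(cond.values()), 6))   # -> 4 2.0
```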

Q2. [8 pts] Games

For the following game tree, each player maximizes their respective utility. Let x, y respectively denote the top and bottom values in a node. Player 1 uses the utility function U1(x, y) = x.

(Figure: game tree with a Player 1 max node at the root and Player 2 max nodes beneath it; each leaf shows a top value x and a bottom value y.)

(a) Both players know that Player 2 uses the utility function U2(x, y) = x - y.

(i) [2 pts] Fill in the rectangles in the figure above with the pair of values returned by each max node.
From top-down, left-right: (6, 2), (6, 2), (·, 0), (5, ·).

(ii) [2 pts] You want to save computation time by using pruning in your game tree search. On the game tree above, put an X on branches that do not need to be explored, or simply write None. Assume that branches are explored from left to right.
None.
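Because both players are maximizers of different components, this is a general-sum game rather than standard minimax. The sketch below is a minimal illustration with made-up leaf values (not the exam's tree): each node returns a full (U1, U2) pair and the node's owner picks the child that maximizes their own component.

```python
# Each leaf is a (u1, u2) pair; internal nodes are lists of children.
# Player 1 maximizes the first component, Player 2 the second; players alternate.
def evaluate(node, player):
    if isinstance(node, tuple):          # leaf: return its utility pair
        return node
    children = [evaluate(child, 3 - player) for child in node]
    return max(children, key=lambda pair: pair[player - 1])

# Hypothetical tree: Player 1 at the root, Player 2 at the two inner nodes.
tree = [[(6, 2), (1, 5)], [(3, 0), (5, 1)]]
print(evaluate(tree, player=1))          # -> (5, 1) for this made-up tree
```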

(Figure repeated for convenience: the game tree from the previous page.)

(b) Now assume Player 2 changes their utility function based on their mood. The probabilities of Player 2's utilities and mood are described in the following tables. Let M, U respectively denote the mood and utility function of Player 2.

P(M = happy) = a, P(M = mad) = b

                                 M = happy   M = mad
P(U2(x, y) = x | M)                  c           f
P(U2(x, y) = x - y | M)              d           g
P(U2(x, y) = x^2 + y^2 | M)          e           h

(i) [4 pts] Calculate the maximum expected utility of the game for Player 1 in terms of the values in the game tree and the tables. It may be useful to record and label your intermediate calculations. You may write your answer in terms of a max function.

We first calculate the new probabilities of each utility function as follows:
P(U2(x, y) = x) = ac + bf
P(U2(x, y) = x - y) = ad + bg
P(U2(x, y) = x^2 + y^2) = ae + bh

Each branch's expected utility for Player 1 weights, by these probabilities, the x value of the leaf Player 2 would pick in that branch under each utility function (read from the game tree):
EU(Left Branch) = (ac + bf)(·) + (ad + bg)(6) + (ae + bh)(6)
EU(Middle Branch) = (ac + bf)(·) + (ad + bg)(·) + (ae + bh)(7)
EU(Right Branch) = (ac + bf)(·) + (ad + bg)(5) + (ae + bh)(·)
MEU(∅) = max(EU(Left Branch), EU(Middle Branch), EU(Right Branch))
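To make the structure of this calculation concrete, here is a small Python sketch with made-up numbers for a through h and for the leaf values (none of these numbers come from the exam); it mixes the mood probabilities into probabilities over Player 2's utility functions and then takes the best branch for Player 1.

```python
a, b = 0.6, 0.4                      # P(M = happy), P(M = mad): hypothetical
c, d, e = 0.2, 0.5, 0.3              # P(U2 | M = happy) for the three utility functions
f, g, h = 0.1, 0.3, 0.6              # P(U2 | M = mad) for the three utility functions
p_u = [a * c + b * f, a * d + b * g, a * e + b * h]

# Player 1's value of each branch under each of Player 2's possible utility
# functions, as would be read off the game tree (placeholder numbers here).
branch_values = {"Left": [3, 6, 6], "Middle": [3, 1, 7], "Right": [3, 5, 1]}
eu = {name: sum(p * v for p, v in zip(p_u, vals)) for name, vals in branch_values.items()}
print(eu)
print("MEU =", max(eu.values()))
```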

Q3. [10 pts] Utilities

Davis is on his way to a final exam planning meeting. He is already running late (the meeting is starting now) and he's trying to determine whether he should wait for the bus or just walk. It takes 20 minutes to get to Cory Hall by walking, and only 5 minutes to get there by bus. The bus will either come in 10, 20, or 30 minutes, each with probability 1/3.

(a) [3 pts] Davis hates being late; his utility for being late as a function of t, the number of minutes late he is, is
U_D(t) = 0 for t ≤ 0, and U_D(t) = -2^(t/5) for t > 0.
What is the expected utility of each action? Should he wait for the bus or walk?

EU(walk) = -2^(20/5) = -16
EU(bus) = (1/3)(-2^((10+5)/5) - 2^((20+5)/5) - 2^((30+5)/5)) = -168/3 = -56
Davis should walk.

(b) [3 pts] Pat is running late too. However, Pat reasons that once he's late, it doesn't matter how late he is. Therefore, his utility function is
U_P(t) = 0 for t ≤ 0, and U_P(t) = -10 for t > 0.
Moreover, Pat prefers riding the bus because it is more comfortable, so riding the bus incurs a utility bonus of 5. If Pat is deciding whether to take the bus or walk when the meeting is just starting, what are his expected utilities for each action? Should he take the bus or walk?

EU(walk) = -10
EU(bus) = -10 + 5 = -5
Pat should take the bus.

(c) [2 pts] Give an example of a decreasing utility function in terms of time such that it will favor decisions that always minimize expected time to get to the meeting.
U(t) = -t. Any decreasing linear function of t is correct.

(d) [2 pts] Give an example of a decreasing utility function in terms of time such that it will be risk-seeking; that is, a lottery with expected time of arrival t will be preferred to a guarantee of arrival time t.
U(t) = 1/t (for t > 0). Any decreasing function with a positive second derivative (concave up) is correct.
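The arithmetic above is easy to double-check; the sketch below (not part of the exam) recomputes both expected utilities by re-implementing the utility functions stated in the problem.

```python
def u_davis(t):                      # 0 if on time, -2^(t/5) if t minutes late
    return 0 if t <= 0 else -(2 ** (t / 5))

def u_pat(t):                        # 0 if on time, -10 if late at all
    return 0 if t <= 0 else -10

wait_times = [10, 20, 30]            # bus arrival times, each with probability 1/3
ride = 5                             # minutes spent riding the bus
bus_bonus = 5                        # Pat's comfort bonus for riding the bus

print("Davis walk:", u_davis(20))                                      # -16
print("Davis bus: ", sum(u_davis(w + ride) for w in wait_times) / 3)   # -56.0
print("Pat walk:  ", u_pat(20))                                        # -10
print("Pat bus:   ", sum(u_pat(w + ride) + bus_bonus for w in wait_times) / 3)  # -5.0
```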

Q4. [8 pts] Farmland CSP

The animals in Farmland aren't getting along and the farmers have to assign them to different pens. To avoid fighting, animals of the same type cannot be in connected pens. Fortunately, the Farmland pens are connected in a tree structure.

(a) [2 pts] Consider the following constraint diagram that shows six pens with lines indicating connected pens. The remaining domains for each pen are listed below each node.

(Figure: six pens connected in a tree by directed arcs, with each pen's remaining domain of animal types (Bull, Duck, Goat) listed below its node.)

After assigning a bull to pen 5, enforce arc consistency on this CSP considering only the directed arcs shown in the figure. What are the remaining values for each pen?

Pen   Values
1     Bull
2     Goat
3     Duck, Goat
4     Bull, Duck
5     Bull
6     Duck, Goat

(b) [2 pts] What is the computational complexity of solving general tree structured CSPs with n nodes and d values in the domain? Give an answer of the form O( ).
O(nd^2)

(c) This True/False question is worth 1 point. Leaving a question blank is worth 0 points. Answering incorrectly is worth -1 point.

(i) [1 pt] [true or false] If root-to-leaf arcs are consistent on a general tree structured CSP, assigning values to nodes from root to leaves will not back-track if a solution exists.
True. Because the arcs are consistent, there is a valid value no matter which parent value was assigned.

(d) [3 pts] Given 3 animal types, what is the largest number of pens a tree structure could have, such that the computational complexity to solve the tree CSP is no greater than the computational complexity to solve a fully connected CSP with 10 pens?

3^8 = 6561. A fully connected CSP is O(d^n), while a tree structure is O(nd^2). The intent of this question was to show that you could have 3^8 nodes in a tree structure and that would be roughly the same amount of computation as a fully connected problem with 10 nodes (3^10 = 3^8 · 3^2). Unfortunately, this question is poorly worded, because computational complexity doesn't quite work with specific values like this.
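The key operation in (a) is the arc-consistency "revise" step, and the comparison in (d) is a one-line computation. The sketch below illustrates both; the pen domains are examples, and revise here encodes only the "connected pens must hold different animal types" constraint.

```python
def revise(tail_domain, head_domain):
    # Keep a value in the tail's domain only if some value in the head's
    # domain is compatible with it (here: not the same animal type).
    return {v for v in tail_domain if any(v != w for w in head_domain)}

# Example: once pen 5 is fixed to {"Bull"}, an arc pointing into pen 5
# prunes "Bull" from the neighboring pen's domain.
print(revise({"Bull", "Duck"}, {"Bull"}))          # {'Duck'}

# Part (d): a tree CSP costs O(n d^2); a fully connected CSP costs O(d^n).
d = 3
full_cost = d ** 10                                # 10 fully connected pens
print(full_cost, full_cost // (d ** 2))            # 59049 and 3^8 = 6561 pens
```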

Q5. [16 pts] MDP

Pacman is using MDPs to maximize his expected utility. In each environment:
- Pacman has the standard actions {North, East, South, West} unless blocked by an outer wall
- There is a reward of 1 point when eating the dot (for example, in the grid below, R(C, South, F) = 1)
- The game ends when the dot is eaten

(a) Consider the following grid, where there is a single food pellet in the bottom right corner (F). The discount factor is 0.5. There is no living reward. The states are simply the grid locations.

A B C
D E F

(i) [2 pts] What is the optimal policy for each state?

State   π(state)
A       East or South
B       East or South
C       South
D       East
E       East

(ii) [2 pts] What is the optimal value for the state of being in the upper left corner (A)? Reminder: the discount factor is 0.5.
V(A) = 0.25

k    V(A)   V(B)   V(C)   V(D)   V(E)   V(F)
0    0      0      0      0      0      0
1    0      0      1      0      1      0
2    0      0.5    1      0.5    1      0
3    0.25   0.5    1      0.5    1      0

(iii) [2 pts] Using value iteration with the value of all states equal to zero at k = 0, for which iteration k will V_k(A) = V*(A)?
k = 3 (see the table above)
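The numbers in (ii) and (iii) can be reproduced by running value iteration directly on this grid. The sketch below does so; it hard-codes the grid's adjacency, deterministic moves, the reward of 1 on any transition into F, and F as a terminal state.

```python
gamma = 0.5
adj = {'A': ['B', 'D'], 'B': ['A', 'C', 'E'], 'C': ['B', 'F'],
       'D': ['A', 'E'], 'E': ['B', 'D', 'F'], 'F': []}

V = {s: 0.0 for s in adj}                         # V_0 = 0 everywhere
for k in range(1, 4):
    V = {s: (max((1.0 if nxt == 'F' else 0.0) + gamma * V[nxt] for nxt in adj[s])
             if adj[s] else 0.0)
         for s in adj}
    print(k, V['A'])                              # V_3(A) = 0.25 = V*(A)
```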

(b) Consider a new Pacman level that begins with cherries in locations D and F. Landing on a grid position with cherries is worth 5 points, and then the cherries at that position disappear. There is still one dot, worth 1 point. The game still only ends when the dot is eaten.

(Figure: the new level's grid, with cherries drawn at positions D and F.)

(i) [2 pts] With no discount (γ = 1) and a living reward of -1, what is the optimal policy for the states in this level's state space?

State                   π(state)
A                       South
C                       East
D, FCherry = true       East
D, FCherry = false      North
E, FCherry = true       East
E, FCherry = false      West
F                       West

Larger state spaces with equivalent states and actions are possible too. For example, with the state representation (grid position, D-cherry, F-cherry), there could be up to 24 different states, where all four states with grid position A are equivalent, etc.

(ii) [2 pts] With no discount (γ = 1), what is the range of living reward values such that Pacman eats exactly one cherry when starting at position A?

Valid range for the living reward is (-2.5, -1.25). Let x equal the living reward. The reward for eating zero cherries is x + 1 (one step plus food). The reward for eating exactly one cherry is 3x + 6 (three steps plus cherry plus food). The reward for eating two cherries, following the longer route through D, E, F, E, D and back, is 7x + 11 (seven steps plus two cherries plus food). x must be greater than -2.5 to make eating at least one cherry worth it (3x + 6 > x + 1). x must be less than -1.25 so that eating the second cherry is not worth it (3x + 6 > 7x + 11).

(c) Quick reinforcement learning questions [PLEASE WRITE CLEARLY]:

(i) [1 pt] What is the difference between value iteration and TD learning?
Value iteration has explicit models for transitions and rewards, while TD learning relies on samples gathered by acting in the environment.

(ii) [1 pt] What is the difference between TD learning and Q-learning?
TD learning stores and updates V(s), while Q-learning stores and updates Q(s, a). Also, Q-learning is able to learn quality policies despite random or suboptimal actions, while TD learning's values are affected by the actions taken.

(iii) [1 pt] What is the purpose of using a learning rate (α) during Q-learning?
The learning rate allows us to average information from previous iterations with the current sample. It allows us to step towards a solution at an incremental rate, incorporating random samples while moving away from poor initial estimates.

(iv) [1 pt] In value iteration, we store the value of each state. What do we store during approximate Q-learning?
We update and store the weights associated with the features.

(v) [2 pts] Give one advantage and one disadvantage of using approximate Q-learning rather than standard Q-learning.
Pros: the feature representation scales to very large or infinite state spaces; the learning process generalizes from seen states to unseen states.
Cons: the true Q function may not be representable in the chosen form; learning may not converge; we need to design the feature functions.
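The distinctions asked about in (c) are easiest to see side by side. The sketch below shows one tabular Q-learning update with learning rate α and one approximate Q-learning weight update; all states, rewards, and features are made-up placeholders.

```python
alpha, gamma = 0.5, 1.0

# Tabular Q-learning: Q(s,a) <- (1 - alpha) Q(s,a) + alpha [r + gamma max_a' Q(s',a')]
Q = {('s', 'a'): 0.0, ('s2', 'a'): 2.0, ('s2', 'b'): 1.0}
r = 1.0
sample = r + gamma * max(Q[('s2', 'a')], Q[('s2', 'b')])
Q[('s', 'a')] = (1 - alpha) * Q[('s', 'a')] + alpha * sample
print(Q[('s', 'a')])                       # 1.5

# Approximate Q-learning: store weights, Q(s,a) = w . f(s,a),
# and update each weight by alpha * (sample - Q(s,a)) * f_i(s,a).
w = [0.0, 1.0]
feats = [1.0, 0.5]                         # f(s, a) for the same transition
q_sa = sum(wi * fi for wi, fi in zip(w, feats))
w = [wi + alpha * (sample - q_sa) * fi for wi, fi in zip(w, feats)]
print(w)                                   # [1.25, 1.625]
```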

Q6. [8 pts] Bayes Nets

(a) For the following graphs, explicitly state the minimum size set of edges that must be removed such that the corresponding independence relations are guaranteed to be true. Mark the removed edges with an X on the graphs.

(i) [2 pts] (The graph, the required independence relation, and the removed edges are shown in the figure.)

(ii) [2 pts] (The graph is shown in the figure.) Remove the edge ending at D, plus either edge EF or the alternative marked on the graph.

(b) You're performing variable elimination over a Bayes net with variables A, B, C, D, E. So far, you've finished joining over C (but not summing out), when you realize you've lost the original Bayes net! Your current factors are f(A), f(B), f(A, D), f(A, B, C, D, E). Note: these are factors, NOT joint distributions. You don't know which variables are conditioned or unconditioned.

(i) [2 pts] What's the smallest number of edges that could have been in the original Bayes net? Draw out one such Bayes net below.

Number of edges = 5.
The original Bayes net must have had 5 factors, one for each node. f(A) and f(B) must have corresponded to nodes A and B, and indicate that neither A nor B has any parents. f(A, D), then, must correspond to node D, and indicates that D has only A as a parent. Since there is only one factor left, f(A, B, C, D, E), for the nodes C and E, those two nodes must have been joined while you were joining over C. This implies two things: 1) E must have had C as a parent, and 2) every other node must have been a parent of either C or E. One solution that uses the fewest possible edges has A → D, C → E, and one edge from each of A, B, and D into C or E (for example A → C, B → C, D → E), for 5 edges in total.

(ii) [2 pts] What's the largest number of edges that could have been in the original Bayes net? Draw out one such Bayes net below.

Number of edges = 8.
The constraints are the same as outlined in part (i). To maximize the number of edges, we make each of A, B, and D a parent of both C and E, as opposed to a parent of only one of them. The resulting net, which is the only possible solution, has the edges A → D, C → E, A → C, B → C, D → C, A → E, B → E, and D → E.
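The reasoning in (b) hinges on what "joining over C" does: every factor mentioning C is multiplied into a single factor over the union of the involved variables. The toy sketch below (arbitrary factor values, binary variables) shows that joining a factor over (A, C) with a factor over (C, E) yields one factor over (A, C, E), which is how the node factors for C and E could end up merged into f(A, B, C, D, E).

```python
from itertools import product

def join(f1, vars1, f2, vars2):
    out_vars = list(vars1) + [v for v in vars2 if v not in vars1]
    out = {}
    for values in product([0, 1], repeat=len(out_vars)):
        assign = dict(zip(out_vars, values))
        out[values] = (f1[tuple(assign[v] for v in vars1)]
                       * f2[tuple(assign[v] for v in vars2)])
    return out, out_vars

f_c = {k: 0.5 for k in product([0, 1], repeat=2)}   # toy factor over (A, C)
f_e = {k: 0.5 for k in product([0, 1], repeat=2)}   # toy factor over (C, E)
joined, joined_vars = join(f_c, ['A', 'C'], f_e, ['C', 'E'])
print(joined_vars, len(joined))                     # ['A', 'C', 'E'] 8
```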

Q7. [10 pts] Chameleon

A team of scientists from Berkeley discovers a rare species of chameleons. Each one can change its color to be blue or gold, once a day. The probability of colors on a certain day is determined solely by its color on the previous day. The team spends 5 days observing 10 chameleons changing color from day to day. The recorded counts for the chameleons' color transitions are below.

(Table: the number of chameleons making each transition, gold to gold, gold to blue, blue to gold, and blue to blue, on each of the four days t = 0, 1, 2, 3.)

(a) [3 pts] They suspect that this phenomenon obeys the stationarity assumption, that is, the transition probabilities are actually the same between all the days. Estimate the transition probabilities P(C_{t+1} | C_t) from the above simulation.

P(C_{t+1} = gold | C_t = gold) = 10/25 = 2/5
P(C_{t+1} = blue | C_t = gold) = 15/25 = 3/5
P(C_{t+1} = gold | C_t = blue) = 10/15 = 2/3
P(C_{t+1} = blue | C_t = blue) = 5/15 = 1/3

To solve this problem, find the total number of chameleons that were gold (25) and then split it into those that turned gold (10) and those that turned blue (15). Normalizing yields 10/25 and 15/25 for the first two probabilities. Repeat for the chameleons that were blue. One common mistake was incorrectly normalizing the probability table (e.g. dividing by 40 instead of 25). Another was to use only the transitions on t = 1 and t = 3 to get 0.2, 0.8, 0.8, 0.2, which fails to account for the other observed transitions on t = 0 and t = 2.

(b) [2 pts] Further scientific tests determine that these chameleons are, in fact, immortal. As a result, they want to determine the distribution of a chameleon's colors over an infinite amount of time. Given the estimated transition probabilities, what is the steady state distribution P(C_∞)?

P(C_∞ = gold) = 10/19
P(C_∞ = blue) = 9/19

Let g = P(C_∞ = gold) and b = P(C_∞ = blue). The stationarity condition gives
g = (2/5) g + (2/3) b, so (3/5) g = (2/3) b and g = (10/9) b.
Combining the two with g + b = 1: (10/9) b + b = (19/9) b = 1, so b = 9/19 and g = 10/19.
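The steady-state answer can be checked with exact arithmetic: for a two-state chain, stationarity reduces to balancing the probability flow between the colors, P(gold)·P(blue | gold) = P(blue)·P(gold | blue). The sketch below solves this with fractions.

```python
from fractions import Fraction as F

p_blue_given_gold = F(15, 25)        # 3/5, estimated in part (a)
p_gold_given_blue = F(10, 15)        # 2/3, estimated in part (a)

# Balance condition: g * P(blue | gold) = (1 - g) * P(gold | blue).
g = p_gold_given_blue / (p_gold_given_blue + p_blue_given_gold)
print(g, 1 - g)                      # 10/19 9/19
```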

The chameleons, realizing that these tests are being performed, decide to hide. The scientists can no longer observe them directly, but they can observe the bugs that one particular chameleon likes to eat. They know that the chameleon's color influences the probability that it will eat some fraction of a nest. The scientists will observe the size of the nests twice per day: once in the morning, before the chameleon eats, and once in the evening, after the chameleon eats. Every day, the chameleon moves on to a new nest.

(c) [1 pt] Draw a DBN using the variables C_t, C_{t+1}, M_t, M_{t+1}, E_t, and E_{t+1}. C refers to the color of the chameleon, M is the size of a nest in the morning, and E is the size of that nest in the evening.

(Figure: C_t → C_{t+1}; within each day, C_t and M_t are the parents of E_t, and C_{t+1} and M_{t+1} are the parents of E_{t+1}.)

When the chameleon is blue, it eats half of the bugs in the chosen nest with probability 1/2, one-third of the bugs with probability 1/4, and two-thirds of the bugs with probability 1/4. When the chameleon is gold, it eats one-third, half, or two-thirds of the bugs, each with probability 1/3.

(d) [4 pts] You would like to use particle filtering to guess the chameleon's color based on the observations of M and E. You observe the following population sizes: M_1 = 24, E_1 = 12, M_2 = 36, and E_2 = 24. Fill in the following tables with the weights you would assign to particles in each state at each time step.

State at t = 1    Weight
Blue              1/2
Gold              1/3

State at t = 2    Weight
Blue              1/4
Gold              1/3

The weights in HMM particle filtering are exactly equal to P(emission | parents of emission). In this problem, the change is that the emission depends on an additional parent, the number of morning bugs. For the blue state at t = 1, the weight is equal to P(E_1 = 12 | M_1 = 24, C_1 = Blue) = 1/2. One extra step that was commonly made was to normalize the weights afterwards to 1 or some other number; this is extraneous, as the resample step of particle filtering only depends on the relative (not absolute) weights of the particles.
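The particle weights in (d) are just the emission probabilities P(E | M, C) evaluated at the observed nest sizes; the sketch below recomputes them from the eating-fraction table given above.

```python
from fractions import Fraction as F

# P(fraction eaten | color), as stated in the problem.
p_eat = {'blue': {F(1, 2): F(1, 2), F(1, 3): F(1, 4), F(2, 3): F(1, 4)},
         'gold': {F(1, 3): F(1, 3), F(1, 2): F(1, 3), F(2, 3): F(1, 3)}}

def weight(color, morning, evening):
    eaten = F(morning - evening, morning)     # fraction of the nest that was eaten
    return p_eat[color].get(eaten, F(0))

for t, (m, e) in enumerate([(24, 12), (36, 24)], start=1):
    print(t, weight('blue', m, e), weight('gold', m, e))
# t=1: blue 1/2, gold 1/3 (half eaten); t=2: blue 1/4, gold 1/3 (a third eaten)
```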

Q8. [10 pts] Perceptron

We would like to use a perceptron to train a classifier for datasets with 2 features per point and labels +1 or -1. Consider the following labeled training data:

Features (x1, x2)   Label y
(-1, 2)             +1
(1, -1)             -1
(1, 2)              -1
(1, 1)              +1

(a) [2 pts] Our two perceptron weights have been initialized to w1 = 2 and w2 = -2. After processing the first point with the perceptron algorithm, what will be the updated values for these weights?

For the first point, y = g(w1 x1 + w2 x2) = g(2(-1) + (-2)(2)) = g(-6) = -1, which is incorrectly classified. To update the weights, we add the first data point: w1 = 2 + (-1) = 1 and w2 = -2 + 2 = 0.

(b) [2 pts] After how many steps will the perceptron algorithm converge? Write never if it will never converge. Note: one step means processing one point. Points are processed in order and then repeated, until convergence.

The data is not separable, so it will never converge.

(c) Instead of the standard perceptron algorithm, we decide to treat the perceptron as a single-node neural network and update the weights using gradient descent on the loss function. The loss function for one data point is Loss(y, y*) = (y - y*)^2, where y* is the training label for a given point and y is the output of our single-node network for that point.

(i) [3 pts] Given a general activation function g(z) and its derivative g'(z), what is the derivative of the loss function with respect to w1 in terms of g, g', y*, x1, x2, w1, and w2?

∂Loss/∂w1 = 2(g(w1 x1 + w2 x2) - y*) g'(w1 x1 + w2 x2) x1

(ii) [2 pts] For this question, the specific activation function that we will use is: g(z) = 1 if z ≥ 0, and g(z) = -1 if z < 0. Given the following gradient descent equation to update the weights given a single data point, with initial weights of w1 = 2 and w2 = -2, what are the updated weights after processing the first point?

Gradient descent update equation: w_i = w_i - α ∂Loss/∂w_i

Because the gradient of g is zero, the weights will stay w1 = 2 and w2 = -2.

(iii) [1 pt] What is the most critical problem with this gradient descent training process with that activation function?

The gradient of that activation function is zero, so the weights will not update.
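The first update in (a) and the non-convergence claim in (b) can be checked by running the perceptron on the data listed in the table above (a sketch, not part of the exam): the first mistake produces the weights from part (a), and every full pass over the data keeps producing at least one mistake because no weight vector separates it.

```python
data = [((-1, 2), +1), ((1, -1), -1), ((1, 2), -1), ((1, 1), +1)]
w = [2, -2]

def predict(weights, x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] >= 0 else -1

# First point: g(2*(-1) + (-2)*2) = g(-6) = -1, but the label is +1, so update.
x, y = data[0]
if predict(w, x) != y:
    w = [wi + y * xi for wi, xi in zip(w, x)]
print(w)                                   # [1, 0], matching part (a)

# Keep cycling through the points: every full pass contains at least one
# mistake, because the data is not linearly separable, so it never converges.
mistake_counts = []
for _ in range(25):
    mistakes = 0
    for x, y in data:
        if predict(w, x) != y:
            w = [wi + y * xi for wi, xi in zip(w, x)]
            mistakes += 1
    mistake_counts.append(mistakes)
print(min(mistake_counts))                 # never 0
```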

THIS PAGE IS INTENTIONALLY LEFT BLANK
